counte[w86,jmc]		Counterfactuals and Approximate Theories

	Artificial intelligence (AI) needs a better theory of counterfactuals
than those proposed by Stalnaker and Lewis.  Their theories are based
on considering a counterfactual true if the consequent is true in the
closest possible world to the real world in which the antecedent is
true.  One could criticize this idea philosophically, e.g. by making up
examples in which it gives a truth value for a counterfactual that is
not in accordance with our intuition.  In the present state of artificial
intelligence, where so little can be done, this might not be much of
a criticism.  The theory could still be useful to AI, even if it covered
only some cases, provided it led to programs that computed the
truth values of enough counterfactuals of AI interest.  Unfortunately
it doesn't, because it offers no way of finding the closest possible
world in which the antecedent of a counterfactual is true.  The supposed
applications of it in AI come from using finite models instead of the
real world and its possible counterparts.

	Here is our proposal.  The Stalnaker-Lewis approach makes the
truth of a counterfactual depend only on what the real world is like
and on the metric used to find the closest possible world.  In our
approach the truth of a counterfactual depends on the real world and
an approximate theory of the world.  We get meaningful counterfactuals
when the theory has at least a partial cartesian product structure.

	We begin with the mathematical notion of a counterfactual on
a cartesian product structure.  Suppose, for example, that we have
 a three dimensional Euclidean space provided with a distinguished co-ordinate
system and a distinguished point $(x_0,y_0,z_0)$.  Indeed suppose
$x_0 = 1$, $y_0 = 2$, and $z_0 = 3$.  Consider the distance of points
from the origin.  The distance of $(x_0,y_0,z_0)$ from the origin
is $\sqrt{1^2 + 2^2 + 3^2} = \sqrt{14}$.  Now consider the counterfactual {\it ``If $x_0$ were
$7$, the distance from the origin would be $20$''}.  This counterfactual
is false, because if $x_0$ were $7$, the distance from the origin would
be $\sqrt{7^2 + 2^2 + 3^2} = \sqrt{62}$.  The example is clear, because
we have imposed a particular co-ordinate system, i.e. given the space
a particular cartesian product structure in which it is meaningful
to change one co-ordinate without changing the others.  Note that
only certain counterfactuals get a clear meaning.  It is not apparent
how to give a truth value to {\it ``If the distance of $P$ from
$(2,2,2)$ were $5$, its distance from the origin would be $10$''}.
The Stalnaker-Lewis approach could be made to do this, but we don't
need all counterfactuals to be meaningful.  Instead we will try to
use our approximate theories to put a useful set of counterfactuals
into the cartesian form exemplified above.  We'll give a formal
definition of cartesian counterfactual later, but
there isn't much more to it than the example provides.
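
	To fix the idea (this is only a sketch; the notation is ours
and the formal definition may be refined later), suppose the relevant
part of the world is represented as a point $w = (x_1,\ldots,x_n)$
of a cartesian product $W = X_1 \times \cdots \times X_n$, and
suppose $f$ is a function defined on $W$.  The cartesian counterfactual
{\it ``if $x_i$ were $a$, $f$ would be $v$''} is true at $w$ just in case
$$f(x_1,\ldots,x_{i-1},a,x_{i+1},\ldots,x_n) = v,$$
i.e. we change the $i$th co-ordinate, hold the others fixed, and
recompute $f$.  In the example above $W$ is the three dimensional space
with its distinguished co-ordinate system, $f$ is distance from the
origin, and $f(7,2,3) = \sqrt{62} \ne 20$.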

Approximate theories

	We again base ourselves on an example.  Suppose two ski
instructors observe a beginner fall down, and one of them says,
{\it ``If he had bent his knees he wouldn't have fallen''}.  The
other ski instructor disagrees, saying, {\it ``No, that wouldn't have
helped, but he wouldn't have fallen if he had put his weight on
his downhill ski in making that turn''}.  Further suppose that
someone has made a videotape of the event, and when the ski
instructors look at the tape, the first agrees that he was mistaken
and the other instructor was right.  How is it possible for them
to agree?  How could we make a computer program that would come
up with the same answer on looking at the videotape?  Certainly
we can't do it by computing directly with possible worlds.

	Our idea is that the ski instructors share an approximate
theory of skiing, and this theory is sufficiently
computational that we can hope to put something like it in
a computer program.  The theory is like this:

A skier is represented as a certain collection of masses with joints.
As he moves down the slope, he changes the distribution of his mass
by operating his joints.  His progress down the slope is determined
by the slope itself and by how he moves his joints.  If he moves
his joints in certain ways on a certain shaped slope, the theory
says he may fall.
If he moves them in other ways, the theory says he won't fall.
For still other motions the theory is noncommittal.  The theory
says nothing about which motions he will make; it only provides
information about his progress as a function of his motion.
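
	Schematically, and only as a sketch whose details we do not
insist on, such a theory can be regarded as a partial function
$$outcome : Slope \times Motion \to \{falls,\ stands\},$$
where an element of $Motion$ describes how the skier operates his
joints over time, and the function is left undefined for the motions
about which the theory is noncommittal.  Note that the domain already
has a cartesian product structure: the slope and the motion can be
varied independently of one another.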

	In the particular example, the instructors can agree, because
they observe the slope and the skier's motion and can compute the
effects of motions somewhat different from the ones they observe.
In particular they share a concept of a modified motion in which
the skier's knees were bent and a concept of a modified motion
in which the skier put his weight on his downhill ski.  Given
such a modified motion they can compute whether the skier would
fall.  A good enough computer program could do the same --- given
the same theory of skiing and the same observations of the actual
motion and the hill.
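
	In the schematic notation above, let $s$ be the observed slope
and $m$ the observed motion, and let $bentknees(m)$ and
$weightdownhill(m)$ stand for the two modified motions the instructors
have in mind; the names are introduced here only for illustration.
The first instructor's counterfactual is true, relative to the shared
theory, just in case
$$outcome(s,\,bentknees(m)) = stands,$$
and similarly for the second.  Looking at the videotape amounts to
obtaining good enough values of $s$ and $m$ to carry out the
computation.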

	This theory of skiing is approximate in many respects.
First, it is not a theory of the world as a whole; it is only
a theory of skiing.  Second, it takes the skier's motions as
input and offers no prediction of what the skier will actually
do.  It doesn't fit the Stalnaker-Lewis paradigm,
because it doesn't deal with whole possible worlds.  We confine
further railing at the Stalnaker-Lewis paradigm to the appendix.

	Nevertheless, the theory isn't arbitrary.  On the basis
of different experiences, the two ski instructors have come up
with sufficiently similar theories that they often agree on the
cause of a beginner's catastrophe.  However, the theory is based
on their whole experience and is not derivable from the single
observation they are discussing.

	In summary, our truth conditions for counterfactuals are
relative to particular approximate theories that have at least
a partial cartesian structure.  They get their objectivity from
the fact that experience supports particular approximate theories.

	In order to go further we need more precise notions of
cartesian counterfactual and approximate theory.
\vfill
\end